364 research outputs found

    Identifying and exploiting concurrency in object-based real-time systems

    Get PDF
    The use of object-based mechanisms, i.e., abstract data types (ADTs), for constructing software systems can help to decrease development costs, increase understandability and increase maintainability. However, execution efficiency may be sacrificed due to the large number of procedure calls, and due to contention for shared ADTs in concurrent systems. Such inefficiencies are a concern in real-time applications that have stringent timing requirements. To address these issues, the potentially inefficient procedure calls are turned into a source of concurrency via asynchronous remote procedure calls (ARPCs), and contention for shared ADTs is reduced via ADT cloning. A framework for concurrency analysis in object-based systems is developed, and compiler techniques for identifying potential concurrency via ARPCs and cloning are introduced. Exploitation of the parallelizing compiler techniques is illustrated in the context of an incremental schedule construction algorithm that enhances concurrency incrementally so that feasible real-time schedules can be constructed. Experimental results show large speedup gains with these techniques. Additionally, experiments show that the concurrency enhancement techniques are often useful in constructing feasible schedules for hard real-time systems.
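    As a minimal illustration of the ARPC idea (not the paper's compiler transformation), the sketch below shows how a blocking call on an ADT can be replaced by an asynchronous call that returns a future; the Buffer type and thread pool are hypothetical stand-ins.

```python
# Minimal sketch of the ARPC idea: a blocking call on an ADT is replaced by an
# asynchronous call that returns a future, so the caller can overlap other work.
# Cloning the ADT per caller would further reduce contention on shared state.
from concurrent.futures import ThreadPoolExecutor

class Buffer:
    """A toy ADT whose operations are assumed to be relatively expensive."""
    def __init__(self):
        self._items = []

    def insert(self, item):
        self._items.append(item)
        return len(self._items)

pool = ThreadPoolExecutor(max_workers=4)
buf = Buffer()

# Synchronous call: the caller blocks until insert() returns.
size = buf.insert("x")

# ARPC-style call: the caller receives a future immediately and claims
# the result only when it is actually needed.
future = pool.submit(buf.insert, "y")
# ... other independent work can proceed here ...
size = future.result()
```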

    CHSMiner: a GUI tool to identify chromosomal homologous segments

    Get PDF
    Background: The identification of chromosomal homologous segments (CHS) within and between genomes is essential for comparative genomics. Various processes, including insertion/deletion and inversion, can cause the degeneration of CHSs.
    Results: Here we present CHSMiner, a Java tool that detects CHSs based on shared gene content alone. It implements a fast greedy search algorithm with rigorous statistical validation, and its friendly graphical interface allows interactive visualization of the results. We tested the software on both simulated and biologically realistic data and compared its performance with similar existing software and data sources.
    Conclusion: CHSMiner is characterized by its integrated workflow, fast speed and convenient usage. It will be useful for both experimentalists and bioinformaticians interested in the structure and evolution of genomes.
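    As a rough illustration of greedy detection of segments defined by shared gene content alone (CHSMiner's actual algorithm and statistical validation are more involved), the sketch below, with hypothetical gap and length parameters, extends runs of shared genes while tolerating small gaps.

```python
# Illustrative greedy search for runs of shared genes between two chromosomes;
# gap tolerance and minimum segment length are hypothetical parameters.
def shared_gene_segments(chrom_a, chrom_b, max_gap=2, min_genes=3):
    """Return index ranges on chrom_a whose genes also occur on chrom_b,
    allowing up to max_gap intervening non-shared genes."""
    in_b = set(chrom_b)
    segments, start, gap = [], None, 0
    for i, gene in enumerate(chrom_a):
        if gene in in_b:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:                      # close the current segment
                end = i - gap
                if end - start + 1 >= min_genes:
                    segments.append((start, end))
                start, gap = None, 0
    if start is not None and len(chrom_a) - gap - start >= min_genes:
        segments.append((start, len(chrom_a) - gap - 1))
    return segments

print(shared_gene_segments(list("ABCXDEYFG"), list("ABDCEFG")))
```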

    Research on a monitoring terminal for a fibre grating sensing device based on Android

    Get PDF
    According to the actual needs of FBG (fibre Bragg grating) sensing instruments for intelligent terminals, software for an FBG sensing monitoring system is designed based on the current mainstream Android operating system and runs on 3G mobile phones. The software is used to remotely access and manage a fibre optic sensing device. The use of the intelligent terminal software will raise the level of intelligence of safety monitoring instruments, so that the management of the monitoring system becomes more flexible, system maintenance becomes more convenient, and the reliability and security of the monitoring equipment can also be improved; in other words, it provides good value for optical fibre sensing applications.
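    A minimal sketch of the kind of remote-polling client such a monitoring terminal might use; the device endpoint URL and the JSON field layout are hypothetical, and the actual software communicates with the fibre grating sensing device in its own way.

```python
# Hypothetical polling client for a remote FBG sensing device.
import json
import time
from urllib.request import urlopen

DEVICE_URL = "http://192.168.1.50:8080/fbg/latest"   # hypothetical device endpoint

def poll_wavelengths(cycles=10, interval_s=5.0):
    """Periodically fetch the latest FBG channel readings and print them."""
    for _ in range(cycles):
        with urlopen(DEVICE_URL, timeout=10) as resp:
            reading = json.loads(resp.read())         # e.g. {"ch1": 1550.12, ...}
        for channel, wavelength_nm in sorted(reading.items()):
            print(f"{channel}: {wavelength_nm:.3f} nm")
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_wavelengths()
```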

    Research on the Post Occupancy Evaluation of Green Public Building Environmental Performance Combined with Carbon Emissions Accounting

    Get PDF
    The development of green building in China has reached a new stage and needs to shift from controlling technologies to controlling total energy consumption [1]. Green building projects should avoid simply stacking technologies and should regard achieving good environmental performance as the fundamental goal. In this paper, we use the method of post-occupancy evaluation and make building environmental performance the core of the evaluation system, in order to reduce the influence of measures-oriented evaluation on the accuracy of the results. We establish an evaluation index system for the environmental performance of green public buildings in severe cold and cold regions, including an index for building life-cycle carbon emissions accounting. We also set out the application plan for the indices and the scoring method, propose an evaluation grade based on the environmental performance level, and finally put forward the POE System of Green Public Building Environmental Performance in Severe Cold and Cold Regions (POE-GPBEPC).
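    A minimal sketch of the kind of weighted scoring and grading such an index system implies; the indicator names, weights, scores and grade thresholds below are hypothetical, not the POE-GPBEPC values.

```python
# Hypothetical weighted index scoring with a life-cycle carbon emissions indicator.
INDICATORS = {                       # weight, measured score on a 0-100 scale
    "indoor_environment": (0.25, 82.0),
    "energy_consumption": (0.35, 74.0),
    "water_efficiency":   (0.15, 68.0),
    "lifecycle_carbon":   (0.25, 71.0),   # carbon emissions accounting index
}

def environmental_performance_grade(indicators):
    """Weighted sum of indicator scores, mapped to a performance grade."""
    total = sum(weight * score for weight, score in indicators.values())
    for threshold, grade in [(85, "A"), (70, "B"), (55, "C")]:
        if total >= threshold:
            return total, grade
    return total, "D"

score, grade = environmental_performance_grade(INDICATORS)
print(f"overall performance {score:.1f} -> grade {grade}")
```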

    Model Suggests Potential for Porites Coral Population Recovery After Removal of Anthropogenic Disturbance (Luhuitou, Hainan, South China Sea)

    Get PDF
    Population models are important for resource management and can provide potential trajectories that are useful for planning purposes, even with incomplete monitoring data. From size-frequency data on the Luhuitou fringing reef, Hainan, South China Sea, a matrix population model of massive corals (Porites lutea) was developed, and trajectories over 100 years under no disturbance and under random disturbances were projected. The model reflects a largely open population of Porites lutea, with low local recruitment and a preponderance of imported recruits. Under no further disturbance, the population of Porites lutea will grow and its size structure will shift from a predominance of small size classes to large size classes; as a result, total Porites cover will increase. Even under random disturbances every 10 to 20 years, the Porites population could remain viable, albeit at lower space cover. The models suggest recovery at Luhuitou following the removal of chronic anthropogenic disturbance. Extending the area of coral reef reserves to protect the open coral community and the path of connectivity is advisable and imperative for the conservation of Hainan’s coral reefs.
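    A minimal sketch of a stage-structured matrix projection with random disturbances; the transition matrix, external recruitment term and disturbance mortality are illustrative values, not the parameters fitted for Porites lutea at Luhuitou.

```python
# Illustrative stage-structured matrix projection with random disturbances.
import numpy as np

rng = np.random.default_rng(0)

# Stages: small, medium, large colonies (columns = current stage).
A = np.array([[0.55, 0.00, 0.00],    # remain small
              [0.30, 0.70, 0.05],    # grow to / remain medium (or shrink from large)
              [0.00, 0.20, 0.90]])   # grow to / remain large
external_recruits = np.array([5.0, 0.0, 0.0])   # largely imported recruitment

def project(n0, years=100, disturbance_prob=1 / 15, mortality=0.5):
    """Project stage abundances forward, with a disturbance roughly every 15 years."""
    n = n0.astype(float)
    trajectory = [n.copy()]
    for _ in range(years):
        n = A @ n + external_recruits
        if rng.random() < disturbance_prob:
            n *= (1.0 - mortality)                 # disturbance removes half of each stage
        trajectory.append(n.copy())
    return np.array(trajectory)

traj = project(np.array([100.0, 20.0, 5.0]))
print("final size structure (small, medium, large):", np.round(traj[-1], 1))
```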

    GEOGLE: context mining tool for the correlation between gene expression and the phenotypic distinction

    Get PDF
    Background: In the post-genomic era, high-throughput gene expression detection technology provides huge amounts of experimental data, which challenges the traditional pipelines for processing and analyzing data in scientific research.
    Results: In our work, we integrated gene expression information from the Gene Expression Omnibus (GEO), biomedical ontology from the Medical Subject Headings (MeSH) and signaling pathway knowledge from sigPathway entries to develop a context mining tool for gene expression analysis, GEOGLE. GEOGLE offers a rapid and convenient way of searching relevant experimental datasets, pathways and biological terms according to multiple types of queries, including biomedical vocabularies, GDS IDs, gene IDs, pathway names and signature lists. Moreover, GEOGLE summarizes the signature genes from a subset of GDSes and estimates the correlation between gene expression and the phenotypic distinction with an integrated p value.
    Conclusion: This approach of globally searching expression data may expand the traditional way of collecting heterogeneous gene expression experiment data. GEOGLE is a novel tool that gives researchers a quantitative way to understand the correlation between gene expression and phenotypic distinction through meta-analysis of gene expression datasets from different experiments, as well as the biological meaning behind it. The web site and user guide of GEOGLE are available at: http://omics.biosino.org:14000/kweb/workflow.jsp?id=00020
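    The abstract does not specify how GEOGLE computes its integrated p value, so purely as an illustration the sketch below combines independent per-dataset p values with Fisher's method.

```python
# Illustrative integration of per-dataset p values using Fisher's method.
import math
from scipy.stats import chi2

def fisher_combined_pvalue(pvalues):
    """Combine independent p values: X = -2 * sum(ln p) ~ chi-squared with 2k d.f."""
    statistic = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2.sf(statistic, df=2 * len(pvalues))

# Hypothetical p values for one gene's expression/phenotype association in three GDS datasets.
print(fisher_combined_pvalue([0.03, 0.20, 0.01]))
```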

    Simplifying Low-Light Image Enhancement Networks with Relative Loss Functions

    Full text link
    Image enhancement is a common technique used to mitigate issues such as severe noise, low brightness, low contrast, and color deviation in low-light images. However, providing an optimal high-light image as a reference for low-light image enhancement tasks is impossible, which makes the learning process more difficult than in other image processing tasks. As a result, although several low-light image enhancement methods have been proposed, most of them are either too complex or insufficient in addressing all the issues in low-light images. In this paper, to make learning easier in low-light image enhancement, we introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions. Specifically, we first recognize the challenges of the need for a large receptive field to obtain global contrast and the lack of an absolute reference, which limit the simplification of network structures in this task. Then, we propose an efficient global feature information extraction component and two loss functions based on relative information to overcome these challenges. Finally, we conducted comparative experiments to demonstrate the effectiveness of the proposed method, and the results confirm that the proposed method can significantly reduce the complexity of supervised low-light image enhancement networks while improving the processing results. The code is available at https://github.com/hitzhangyu/FLW-Net (19 pages, 11 figures)
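    As an illustration of the general idea of a loss built on relative rather than absolute information (FLW-Net's actual loss functions are defined in the paper), the sketch below compares prediction and reference after removing their global brightness.

```python
# Illustrative "relative" loss: compare brightness-normalised images, so the
# network is not forced to match an arbitrary absolute exposure level.
import torch
import torch.nn.functional as F

def relative_l1_loss(pred, target, eps=1e-6):
    """L1 distance between per-sample brightness-normalised images."""
    pred_rel = pred / (pred.mean(dim=(1, 2, 3), keepdim=True) + eps)
    target_rel = target / (target.mean(dim=(1, 2, 3), keepdim=True) + eps)
    return F.l1_loss(pred_rel, target_rel)

pred = torch.rand(2, 3, 64, 64)     # enhanced low-light images (N, C, H, W)
target = torch.rand(2, 3, 64, 64)   # reference images
print(relative_l1_loss(pred, target).item())
```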

    Integrating 3D City Data through Knowledge Graphs

    Full text link
    CityGML is a widely adopted standard of the Open Geospatial Consortium (OGC) for representing and exchanging 3D city models. The representation of semantic and topological properties in CityGML makes it possible to query such 3D city data to perform analysis in various applications, e.g., security management and emergency response, energy consumption and estimation, and occupancy measurement. However, the potential of querying CityGML data has not been fully exploited. The official GML/XML encoding of CityGML is intended only as an exchange format and is not suitable for query answering. The most common way of dealing with CityGML data is to store them in the 3DCityDB system as relational tables and then query them with the standard SQL query language. Nevertheless, for end users it remains a challenging task to formulate queries over 3DCityDB directly for their ad-hoc analytical tasks, because there is a gap between the conceptual semantics of CityGML and the relational schema adopted in 3DCityDB. In fact, the semantics of CityGML itself can be modeled as a suitable ontology. The technology of Knowledge Graphs (KGs), where an ontology is at the core, is a good solution to bridge this gap. Moreover, embracing KGs makes it easier to integrate with other spatial data sources, e.g., OpenStreetMap and existing (Geo)KGs (e.g., Wikidata, DBpedia, and GeoNames), and to perform queries combining information from multiple data sources. In this work, we describe a CityGML KG framework that populates the concepts in the CityGML ontology using declarative mappings to 3DCityDB, thus exposing the CityGML data therein as a KG. To demonstrate the feasibility of our approach, we use CityGML data from the city of Munich as test data and integrate OpenStreetMap data from the same area.
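    A minimal sketch of querying such a CityGML knowledge graph through a SPARQL endpoint; the endpoint URL and the ontology prefix, class and property names are hypothetical placeholders, not the actual CityGML ontology IRIs.

```python
# Hypothetical SPARQL query against the exposed CityGML knowledge graph.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8080/sparql")   # hypothetical KG endpoint
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX city: <http://example.org/citygml#>
    SELECT ?building ?height WHERE {
        ?building a city:Building ;
                  city:measuredHeight ?height .
        FILTER(?height > 50)
    }
    LIMIT 10
""")

# Print buildings taller than 50 m returned by the (hypothetical) endpoint.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["building"]["value"], row["height"]["value"])
```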

    Tree of Life Based on Genome Context Networks

    Get PDF
    Efforts in phylogenomics have greatly improved our understanding of the backbone tree of life. However, owing to systematic error in sequence data, sequence-based phylogenomic approaches lead to well-resolved trees that nonetheless show statistically significant incongruence. Thus, an independent test of current phylogenetic knowledge is required. Here, we have devised a distance-based strategy to reconstruct a highly resolved backbone tree of life on the basis of the genome context networks of 195 fully sequenced representative species. Along with strongly supporting the monophyly of the three superkingdoms and of most taxonomic sub-divisions, the derived tree also suggests some intriguing results, such as a high-G+C Gram-positive origin of Bacteria and the classification of Symbiobacterium thermophilum and Alcanivorax borkumensis within Firmicutes. Furthermore, simulation analyses indicate that adding more gene relationships of high accuracy can greatly improve the resolution of the phylogenetic tree. Our results demonstrate the feasibility of reconstructing a highly resolved phylogenetic tree from extensible gene networks across all three domains of life. This strategy also implies that the relationships between genes (the gene network) can define the identity of a species.
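    A minimal sketch of the distance-based idea: represent each genome by the link set of its gene context network, take pairwise Jaccard distances between link sets, and build a tree from the distance matrix. The toy networks, the distance measure and the tree-building method (average-linkage clustering) are illustrative, not the paper's exact choices.

```python
# Illustrative distance-based tree from toy genome context networks.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

networks = {                       # hypothetical context networks: sets of gene links
    "sp1": {("a", "b"), ("b", "c"), ("c", "d")},
    "sp2": {("a", "b"), ("b", "c"), ("c", "e")},
    "sp3": {("a", "b"), ("x", "y"), ("y", "z")},
}

species = sorted(networks)
n = len(species)
dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    a, b = networks[species[i]], networks[species[j]]
    dist[i, j] = dist[j, i] = 1.0 - len(a & b) / len(a | b)   # Jaccard distance

tree = linkage(squareform(dist), method="average")
print(dendrogram(tree, labels=species, no_plot=True)["ivl"])  # leaf order of the tree
```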

    Reverse Engineering of Computer-Based Navy Systems

    Get PDF
    The financial pressure to meet the need for change in computer-based systems through evolution rather than through revolution has spawned the discipline of reengineering. One driving factor of reengineering is that enhanced requirements placed on computer-based systems are increasingly overstressing the processing resources of those systems. Thus, the distribution of processing load over highly parallel and distributed hardware architectures has become part of the reengineering process for computer-based Navy systems. This paper presents an intermediate representation (IR) for capturing features of computer-based systems to enable reengineering for concurrency. A novel feature of the IR is that it incorporates the mission critical software architecture, a view that enables information to be captured at five levels of granularity: the element/program level, the task level, the module/class/package level, the method/procedure level, and the statement/instruction level. An approach to reverse engineering is presented, in which the IR is captured and analyzed to identify potential concurrency. The paper defines concurrency metrics to guide the reengineering tasks of identifying, enhancing, and assessing concurrency, and of performing partitioning and assignment. Concurrency metrics are defined at several tiers of the mission critical software architecture. In addition to contributing an approach to reverse engineering for computer-based systems, the paper also discusses a reverse engineering analysis toolset that constructs and displays the IR and the concurrency metrics for Ada programs. Finally, the paper discusses the context of these reengineering efforts within the United States Navy by describing two reengineering projects focused on subsystems of the AEGIS Weapon System.
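    A minimal sketch of the kind of concurrency metric such an IR enables at one tier of granularity: the fraction of unit pairs that share no data and could in principle run concurrently. The toy task-level IR and the metric definition are hypothetical illustrations, not the paper's definitions.

```python
# Hypothetical task-level concurrency metric over a toy intermediate representation.
from itertools import combinations

# Toy task-level IR: each task with the shared objects it reads or writes.
tasks = {
    "track_filter":   {"track_table"},
    "display_update": {"track_table", "display_buffer"},
    "self_test":      {"bit_log"},
    "comms_in":       {"msg_queue"},
}

def concurrency_ratio(units):
    """Fraction of unit pairs with no shared data, i.e. potentially concurrent pairs."""
    pairs = list(combinations(units, 2))
    independent = sum(1 for a, b in pairs if not (units[a] & units[b]))
    return independent / len(pairs)

print(f"task-level concurrency ratio: {concurrency_ratio(tasks):.2f}")
```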